Search results for all records where Creators/Authors contains: "Kumar, Sachin"


  1. Free, publicly-accessible full text available December 1, 2025
  2. Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time. While autoregressive LMs have benefited immensely from scaling and instruction-based learning, existing studies of diffusion LMs have been conducted on a smaller scale. Starting with a recently proposed diffusion model SSD-LM, in this work we first explore methods to scale it from 0.4B to 13B parameters, proposing techniques to improve its training and inference efficiency, and to finetune the model to follow instructions. Armed with a more powerful, general purpose diffusion LM, we introduce the primary contribution of this work – SSD-2 – an approach to easily ensemble at inference time a large general-purpose diffusion LM with smaller, but specialized and contextualized diffusion LMs. We show that SSD-2 facilitates novel ensembles with 100x smaller models that can be customized and deployed by individual users. We find that compared to autoregressive models, the collaboration between diffusion LMs is more effective, leading to higher-quality model responses due to their ability to dynamically incorporate bi-directional contexts. 
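The core idea of combining a large general-purpose diffusion LM with a small specialized one at each denoising step can be sketched as follows. This is a minimal illustration assuming an additive, product-of-experts-style logit combination; the function names and the exact combination rule are assumptions, not the SSD-2 implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last (vocabulary) axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_step(logits_general, logits_expert, weight=1.0):
    # At one denoising step, sum the per-token vocabulary logits of the
    # large general model and the small specialized model, then renormalize.
    return softmax(logits_general + weight * logits_expert)

# A confident small expert can flip the large model's prediction:
general = np.array([2.0, 1.0, 0.0])  # large model prefers token 0
expert = np.array([0.0, 0.0, 5.0])   # specialized model prefers token 2
probs = ensemble_step(general, expert)
```

Because both models predict full token distributions at every denoising step, this combination can be applied throughout generation rather than only once per emitted token, which is what makes the collaboration between diffusion LMs attractive.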
  3. Language model (LM) prompting—a popular paradigm for solving NLP tasks—has been shown to be susceptible to miscalibration and brittleness to slight prompt variations, caused by its discriminative prompting approach, i.e., predicting the label given the input. To address these issues, we propose GEN-Z—a generative prompting framework for zero-shot text classification. GEN-Z is generative, as it measures the LM likelihood of input text, conditioned on natural language descriptions of labels. The framework is multivariate, as label descriptions allow us to seamlessly integrate additional contextual information about the labels to improve task performance. On various standard classification benchmarks, with six open-source LM families, we show that zero-shot classification with simple contextualization of the data source of the evaluation set consistently outperforms both zero-shot and few-shot baselines while improving robustness to prompt variations. Further, our approach enables personalizing classification in a zero-shot manner by incorporating author, subject, or reader information in the label descriptions. 
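The generative scoring idea—pick the label whose description makes the input most likely—can be sketched in a few lines. This is a hedged sketch: the scorer below is a toy token-overlap stand-in, not a real LM likelihood, and all names are illustrative.

```python
import math

def genz_classify(text, label_descriptions, loglik):
    # Generative zero-shot classification: score each label by the
    # (approximate) log-likelihood of the input text conditioned on a
    # natural-language description of that label, then take the argmax.
    scores = {label: loglik(text, desc) for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get), scores

def toy_loglik(text, prefix):
    # Placeholder scorer (NOT a real LM): token-overlap proxy for log P(text | prefix).
    t, p = set(text.lower().split()), set(prefix.lower().split())
    return math.log(1 + len(t & p))

labels = {
    "positive": "The following movie review expresses a positive opinion:",
    "negative": "The following movie review expresses a negative opinion:",
}
pred, scores = genz_classify("a wonderful, positive surprise of a film", labels, toy_loglik)
```

Extra context (e.g., the author or data source) would simply be concatenated into each label description before scoring, which is what makes the framework multivariate.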
  4. The extracellular matrix (ECM) is a dynamic and complex microenvironment that modulates cell behavior and cell fate. Changes in ECM composition and architecture have been correlated with development, differentiation, and disease progression in various pathologies, including breast cancer [1]. Studies have shown that aligned fibers drive a pro-metastatic microenvironment, promoting the transformation of mammary epithelial cells into invasive ductal carcinoma via the epithelial-to-mesenchymal transition (EMT) [2]. The impact of ECM orientation on breast cancer metabolism, however, is largely unknown. Here, we employ two non-invasive imaging techniques, fluorescence-lifetime imaging microscopy (FLIM) and intensity-based multiphoton microscopy, to assess the metabolic states of cancer cells cultured on ECM-mimicking nanofibers in a random and aligned orientation. By tracking the changes in the intrinsic fluorescence of nicotinamide adenine dinucleotide and flavin adenine dinucleotide, as well as expression levels of metastatic markers, we reveal how ECM fiber orientation alters cancer metabolism and EMT progression. Our study indicates that aligned cellular microenvironments play a key role in promoting metastatic phenotypes of breast cancer, as evidenced by a more glycolytic metabolic signature on nanofiber scaffolds of aligned orientation compared to scaffolds of random orientation. This finding is particularly relevant for subsets of breast cancer marked by high levels of collagen remodeling (e.g., pregnancy-associated breast cancer), and may serve as a platform for predicting clinical outcomes within these subsets [3–6]. 
    Free, publicly-accessible full text available December 1, 2025
  5. In this work, we take a first step towards designing summarization systems that are faithful to the author’s intent, not only the semantic content of the article. Focusing on a case study of preserving political perspectives in news summarization, we find that existing approaches alter the political opinions and stances of news articles in more than 50% of summaries, misrepresenting the intent and perspectives of the news authors. We thus propose P3Sum, a diffusion model-based summarization approach controlled by political perspective classifiers. In P3Sum, the political leaning of a generated summary is iteratively evaluated at each decoding step, and any drift from the article’s original stance incurs a loss back-propagated to the embedding layers, steering the political stance of the summary at inference time. Extensive experiments on three news summarization datasets demonstrate that P3Sum outperforms state-of-the-art summarization systems and large language models by up to 13.7% in terms of the success rate of stance preservation, with competitive performance on standard metrics of summarization quality. Our findings present a first analysis of preservation of pragmatic features in summarization, highlight the lacunae in existing summarization models—that even state-of-the-art models often struggle to preserve author’s intents—and develop new summarization systems that are more faithful to author’s perspectives. 
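The classifier-guided decoding loop described above can be illustrated with a single guidance step. This sketch assumes a simple linear stance classifier with a quadratic drift loss so the gradient is analytic; the names and the classifier form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stance_guidance_step(embedding, stance_vector, target_stance, step_size=0.5):
    # Score the current (soft) summary representation with a linear
    # stance classifier, penalize drift from the source article's
    # stance with a quadratic loss, and move the embedding down the
    # gradient of that loss.
    score = float(embedding @ stance_vector)
    loss = 0.5 * (score - target_stance) ** 2
    grad = (score - target_stance) * stance_vector  # d(loss)/d(embedding)
    return embedding - step_size * grad, loss

rng = np.random.default_rng(0)
emb = rng.normal(size=4)
stance = np.array([1.0, 0.0, 0.0, 0.0])  # unit stance direction
target = 0.8                             # article's stance score
for _ in range(20):
    emb, loss = stance_guidance_step(emb, stance, target)
```

Repeating such a step at every denoising iteration is what lets the stance be steered at inference time, without retraining the summarizer.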
  6. The in-flight reduction of iron ore particles using an atmospheric pressure hydrogen plasma is investigated. Iron ore particles with a size less than 75 µm are aerosolized and carried with an argon-hydrogen (90%–10%) gas mixture through an atmospheric pressure microwave plasma. After the treatment, the collected particles are observed to fall into three distinct populations: (i) fully reduced nanoparticles, (ii) partially reduced spheres, larger than the feedstock, and (iii) partially melted, partially reduced agglomerates. A model is developed to explain the possible mechanism for the origin of the three populations. The nanoparticles (i) are found to be likely formed from the previously evaporated material, whereas the particles (ii) and (iii) result from the partial/complete melting of the particles and agglomerates flowing through the reactor. The gas temperature is estimated to be more than 2000 K, which enables the rapid melting, evaporation, and reduction of these particles within residence times of only a few tens of milliseconds. 
  7. Despite the growing success of diffusion models in continuous-valued domains (e.g., images), similar efforts for discrete domains such as text have yet to match the performance of autoregressive language models. In this work, we present SSD-LM—a diffusion-based language model with two key design choices. First, SSD-LM is semi-autoregressive, iteratively generating blocks of text, allowing for flexible output length at decoding time while enabling local bidirectional context updates. Second, it is simplex-based, performing diffusion on the natural vocabulary space rather than a learned latent space, allowing us to incorporate classifier guidance and modular control using off-the-shelf classifiers without any adaptation. We evaluate SSD-LM on unconstrained text generation benchmarks, and show that it matches or outperforms strong autoregressive GPT-2 models across standard quality and diversity metrics, while vastly outperforming diffusion-based baselines. On controlled text generation, SSD-LM also outperforms competitive baselines, with an extra advantage in modularity. 
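The semi-autoregressive, simplex-based decoding scheme can be sketched as a toy loop: generate text block by block, and within each block start from a noisy distribution over the vocabulary at every position and iteratively denoise it given what was generated so far. The `denoise` function here is an assumed model stub, not SSD-LM itself.

```python
import numpy as np

def generate_blocks(denoise, num_blocks, block_len, vocab_size, steps=5, seed=0):
    # Semi-autoregressive decoding: emit one fixed-size block at a time,
    # refining the whole block jointly so positions can use local
    # bidirectional context before being committed.
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(num_blocks):
        # Noisy start: a random point on the vocabulary simplex per position.
        simplex = rng.dirichlet(np.ones(vocab_size), size=block_len)
        for _ in range(steps):
            simplex = denoise(tokens, simplex)  # condition on prior blocks
        tokens.extend(simplex.argmax(axis=-1).tolist())
    return tokens

def toy_denoise(context, simplex):
    # Stand-in denoiser that just pulls probability mass toward token 0.
    target = np.zeros_like(simplex)
    target[:, 0] = 1.0
    return 0.5 * simplex + 0.5 * target

out = generate_blocks(toy_denoise, num_blocks=2, block_len=3, vocab_size=5)
```

Because the intermediate state is a distribution over the actual vocabulary rather than a learned latent, an off-the-shelf classifier could score `simplex` directly at each step, which is what enables modular guidance.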
  8. Recent advances in the capacity of large language models to generate human-like text have resulted in their increased adoption in user-facing settings. In parallel, these improvements have prompted a heated discourse around the risks of societal harms they introduce, whether inadvertent or malicious. Several studies have explored these harms and called for their mitigation via development of safer, fairer models. Going beyond enumerating the risks of harms, this work provides a survey of practical methods for addressing potential threats and societal harms from language generation models. We draw on several prior works’ taxonomies of language model risks to present a structured overview of strategies for detecting and ameliorating different kinds of risks/harms of language generators. Bridging diverse strands of research, this survey aims to serve as a practical guide for both LM researchers and practitioners, with explanations of different strategies’ motivations, their limitations, and open problems for future research. 
  9. In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data. Basically, we design and synthesize a wide range of potential errors and check whether they result in a commensurate drop in the metric scores. We examine a range of recently proposed evaluation metrics based on pretrained language models, for the tasks of open-ended generation, translation, and summarization. Our experiments reveal interesting insensitivities, biases, or even loopholes in existing metrics. For example, we find that BERTScore is confused by truncation errors in summarization, and MAUVE (built on top of GPT-2) is insensitive to errors at the beginning or middle of generations. Further, we investigate the reasons behind these blind spots and suggest practical workarounds for a more reliable evaluation of text generation. We have released our code and data at https://github.com/cloudygoose/blindspot_nlg. 
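The stress-test methodology—inject a synthetic error and check that the metric's score drops commensurately—can be sketched as follows. The metric below is a toy unigram-recall stand-in, not BERTScore or MAUVE; all names are illustrative.

```python
def truncate(text, frac=0.5):
    # Synthetic error: keep only the first `frac` of the tokens,
    # mimicking a truncated summary.
    toks = text.split()
    return " ".join(toks[: max(1, int(len(toks) * frac))])

def unigram_recall(reference, candidate):
    # Toy stand-in metric: fraction of reference tokens found in the candidate.
    cand = set(candidate.split())
    ref = reference.split()
    return sum(w in cand for w in ref) / len(ref)

def stress_test(metric, reference, output, perturb):
    # Core check: a trustworthy metric should score the perturbed
    # output strictly lower than the clean one.
    clean = metric(reference, output)
    broken = metric(reference, perturb(output))
    return clean, broken, broken < clean

clean, broken, ok = stress_test(
    unigram_recall,
    reference="the cat sat on the mat",
    output="the cat sat on the mat",
    perturb=truncate,
)
```

A metric that returns `ok == False` for some error type has a blind spot for that error—exactly the kind of insensitivity the abstract reports for truncation under BERTScore and for early-position errors under MAUVE.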